
    Review: Object vision in a structured world

    Get PDF
    In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review, we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.

    The modulatory effects of attention and spatial location on masked face-processing: insights from the reach-to-touch paradigm

    No full text
    Thesis by publication. On title page: Department of Cognitive Science, ARC Centre of Excellence in Cognition and its Disorders, Faculty of Human Sciences, Macquarie University, Sydney, Australia. Includes bibliographical references. In masked priming paradigms, targets are responded to faster and more accurately when preceded by subliminal primes from the same category than by primes from a different category. Intriguingly, whereas these congruence priming effects elicited by word and number stimuli depend on the allocation of attention, masked faces produce priming regardless of how well attention is focused. The research presented in this thesis exploits this unique property to examine the temporal dynamics of nonconscious information processing and the factors that modulate this hidden cognitive process. Using congruence priming effects for masked faces as an index of nonconscious perception, I present four empirical studies that examine how processing below our level of conscious awareness is affected by manipulations of spatial and temporal attention. In Study 1, I show that the allocation of both spatial and temporal attention facilitates nonconscious processing at less than 350 ms of stimulus-processing time. These results suggest that attention modulates nonconscious information processing in a graded fashion that mirrors its influence on the perception of consciously presented stimuli. Study 2 investigates the differential benefit of attention between the vertical hemifields and documents the breakthrough finding that face-processing is supported better in the upper hemifield than in the lower hemifield. Study 3 explores whether this upper-hemifield advantage generalises to recognition of a nonface object (human hands). Study 4 investigates and dispels the possibility that the pattern of vertical asymmetry effects for face-perception relates to an upward bias in participants' visuospatial attention. The final chapter of this thesis summarises the findings from these four studies and discusses their implications within a broader research context. Mode of access: World Wide Web. 1 online resource (xiv, 285, [25] pages): graphs (some colour).

    Contextual and spatial associations between objects interactively modulate visual processing

    Get PDF
    Much of what we know about object recognition arises from the study of isolated objects. In the real world, however, we commonly encounter groups of contextually associated objects (e.g., teacup and saucer), often in stereotypical spatial configurations (e.g., teacup above saucer). Here we used electroencephalography to test whether identity-based associations between objects (e.g., teacup–saucer vs. teacup–stapler) are encoded jointly with their typical relative positioning (e.g., teacup above saucer vs. below saucer). Observers viewed a 2.5-Hz image stream of contextually associated object pairs intermixed with nonassociated pairs as every fourth image. The differential response to nonassociated pairs (measurable at 0.625 Hz in 28/37 participants) served as an index of contextual integration, reflecting the association of object identities in each pair. Over right occipitotemporal sites, this signal was larger for typically positioned object streams, indicating that spatial configuration facilitated the extraction of the objects’ contextual association. This high-level influence of spatial configuration on object identity integration arose ∼320 ms post-stimulus onset, with lower-level perceptual grouping (shared with inverted displays) present at ∼130 ms. These results demonstrate that contextual and spatial associations between objects interactively influence object processing. We interpret these findings as reflecting the high-level perceptual grouping of objects that frequently co-occur in highly stereotyped relative positions.
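    The oddball logic in this design is simple arithmetic: with images presented at 2.5 Hz and a nonassociated pair occupying every fourth position, any response specific to those pairs is tagged at 2.5 / 4 = 0.625 Hz. The Python/NumPy sketch below is an editorial illustration of that logic and of how an amplitude estimate at the tagged frequency could be read off an EEG spectrum; the sampling rate, recording length, and synthetic signal are assumptions, not the authors' analysis pipeline.

        # Illustrative sketch only (not the study's pipeline): oddball frequency
        # tagging, with an amplitude read-out at the tagged frequency.
        import numpy as np

        base_rate_hz = 2.5                    # images shown at 2.5 Hz
        oddball_every_n = 4                   # nonassociated pair as every 4th image
        oddball_hz = base_rate_hz / oddball_every_n   # = 0.625 Hz

        def amplitude_at(signal, srate_hz, target_hz):
            """Amplitude-spectrum value of one channel's trace at target_hz."""
            spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / srate_hz)
            return spectrum[np.argmin(np.abs(freqs - target_hz))]

        # Synthetic example: a 0.625 Hz component buried in noise (assumed values).
        srate = 250                           # assumed EEG sampling rate (Hz)
        t = np.arange(0, 64, 1.0 / srate)     # 64 s so 0.625 Hz falls on an exact bin
        rng = np.random.default_rng(1)
        eeg = 0.5 * np.sin(2 * np.pi * oddball_hz * t) + rng.standard_normal(t.size)
        print(round(amplitude_at(eeg, srate, oddball_hz), 3))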

    Critical information thresholds underlying generic and familiar face categorisation at the same face encounter

    Get PDF
    Seeing a face in the real world provokes a host of automatic categorisations related to sex, emotion, identity, and more. Such individual facets of human face recognition have been extensively examined using overt categorisation judgements, yet their relative informational dependencies during the same face encounter are comparatively unknown. Here we used EEG to assess how increasing access to sensory input governs two ecologically relevant brain functions elicited by seeing a face: distinguishing faces from nonfaces, and recognising people we know. Observers viewed a large set of natural images that progressively increased in either image duration (experiment 1) or spatial frequency content (experiment 2). We show that in the absence of an explicit categorisation task, the human brain requires less sensory input to categorise a stimulus as a face than it does to recognise whether that face is familiar. Moreover, whereas sensory thresholds for distinguishing faces/nonfaces were remarkably consistent across observers, there was high inter-individual variability in the lower informational bound for familiar face recognition, underscoring the neurofunctional distinction between these categorisation functions. By i) indexing a form of face recognition that goes beyond simple low-level differences between categories, and ii) tapping multiple recognition functions elicited by the same face encounters, the information minima we report bear high relevance to real-world face encounters, where the same stimulus is categorised along multiple dimensions at once. Thus, our finding of lower informational requirements for generic vs. familiar face recognition constitutes some of the strongest evidence to date for the intuitive notion that sensory input demands should be lower for recognising face category than face identity.
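    Experiment 2's manipulation can be pictured as a sequence of images whose spatial-frequency cutoff rises over successive presentations, so that coarse structure arrives before fine detail. The sketch below (Python/NumPy) is a hypothetical illustration of one way such a sequence could be built; the cutoff values and the stand-in image are assumptions, not the stimuli or code used in the study.

        # Illustrative sketch only: progressively increasing spatial-frequency
        # content by raising a low-pass cutoff (assumed values throughout).
        import numpy as np

        def low_pass(img, cutoff_cpi):
            """Keep spatial frequencies up to cutoff_cpi (cycles per image)."""
            f = np.fft.fftshift(np.fft.fft2(img))           # centre the DC component
            h, w = img.shape
            yy, xx = np.meshgrid(np.arange(h) - h // 2,
                                 np.arange(w) - w // 2, indexing="ij")
            radius = np.sqrt(xx ** 2 + yy ** 2)             # distance from DC, cycles/image
            f[radius > cutoff_cpi] = 0                      # discard higher frequencies
            return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

        rng = np.random.default_rng(0)
        stand_in_image = rng.standard_normal((256, 256))    # placeholder for a photograph
        sequence = [low_pass(stand_in_image, c) for c in (2, 4, 8, 16, 32, 64, 128)]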

    Face-sex categorization is better above fixation than below: evidence from the reach-to-touch paradigm

    No full text
    The masked congruence effect (MCE) elicited by nonconsciously presented faces in a sex-categorization task has recently been shown to be sensitive to the effects of attention. Here we investigated how spatial location along the vertical meridian modulates the MCE for face-sex categorization. Participants made left and right reaching movements to classify the sex of a target face that appeared either immediately above or below central fixation. The target was preceded by a masked prime face that was either congruent (i.e., same sex) or incongruent (i.e., opposite sex) with the target. In the reach-to-touch paradigm, participants typically classify targets more efficiently (i.e., their finger heads in the correct direction earlier and faster) on congruent than on incongruent trials. We observed an upper-hemifield advantage in the time course of this MCE, such that primes affected target classification sooner when they were presented in the upper visual field (UVF) rather than the lower visual field (LVF). Moreover, we observed a differential benefit of attention between the vertical hemifields, in that the MCE was dependent on the appropriate allocation of spatial attention in the LVF, but not the UVF. Taken together, these behavioral findings suggest that the processing of faces qua faces (e.g., sex-categorization) is more robust in upper-hemifield locations.
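    At its core, the MCE reported here is a difference score: performance on incongruent trials minus performance on congruent trials, computed separately for each condition of interest (e.g., UVF vs. LVF, or validly vs. invalidly cued). The minimal Python sketch below uses made-up numbers to show the form of that index; it is not data or analysis code from the study.

        # Illustrative sketch only: the masked congruence effect as a difference
        # score. The values below are invented for demonstration.
        import numpy as np

        # Hypothetical per-trial efficiency values (ms) for one condition.
        congruent = np.array([412, 398, 405, 420, 391], dtype=float)
        incongruent = np.array([431, 428, 419, 445, 422], dtype=float)

        mce_ms = incongruent.mean() - congruent.mean()      # positive = congruent advantage
        print(f"MCE = {mce_ms:.1f} ms")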

    Gaining the upper hand: evidence of vertical asymmetry in sex-categorisation of human hands

    No full text
    Visual perception is characterised by asymmetries arising from the brain's preferential response to particular stimulus types at different retinal locations. Whereas the lower visual field (LVF) holds an advantage over the upper visual field (UVF) for many tasks (e.g., hue discrimination, contrast sensitivity, motion processing), face-perception appears best supported at above-fixation locations (Quek & Finkbeiner, 2014a). This finding is consistent with Previc's (1990) suggestion that vision in the UVF has become specialised for the object recognition processes often required in "extra-personal" space. Outside of faces, however, there have been very few investigations of vertical asymmetry effects for higher-level objects. Our aim in the present study was thus to determine whether the UVF advantage reported for face-perception would extend to a nonface object: human hands. Participants classified the sex of hand images presented above or below central fixation by reaching out to touch a left or right response panel. On each trial, a briefly presented spatial cue captured the participant's spatial attention to either the location where the hand was about to appear (valid cue) or the opposite location (invalid cue). We observed that cue validity only modulated the efficiency of the sex-categorisation response for targets in the LVF and not the UVF, just as we have reported previously for face-sex categorisation (Quek & Finkbeiner, 2014a). Taken together, the data from these studies provide some empirical support for Previc's (1990) speculation that object recognition processes may enjoy an advantage in the upper hemifield.

    Spatial and temporal attention modulate the early stages of face processing: behavioural evidence from a reaching paradigm

    Get PDF
    A presently unresolved question within the face perception literature is whether attending to the location of a face modulates face processing (i.e., spatial attention). Opinions on this matter diverge along methodological lines: whereas neuroimaging studies have observed that the allocation of spatial attention serves to enhance the neural response to a face, findings from behavioural paradigms suggest face processing is carried out independently of spatial attention. In the present study, we reconcile this divide by using a continuous behavioural response measure that indexes face processing at a temporal resolution not available in discrete behavioural measures (e.g., button press). Using reaching trajectories as our response measure, we observed that although participants were able to process faces both when attended and unattended (as others have found), face processing was not impervious to attentional modulation. Attending to the face conferred clear benefits on sex-classification processes at less than 350 ms of stimulus-processing time. These findings constitute the first reliable demonstration of the modulatory effects of both spatial and temporal attention on face processing within a behavioural paradigm.
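    The continuous measure referred to here is the finger's movement path: early in the reach, its heading already indicates which response the participant is committing to, before the response panel is ever touched. The toy Python sketch below shows how an initial heading angle could be read off sampled finger positions; the sampling details, coordinate frame, and trajectory are assumptions, not the study's motion-tracking or analysis code.

        # Illustrative sketch only: an initial heading angle from a sampled reach
        # trajectory (all values and conventions below are assumed).
        import numpy as np

        def initial_heading_deg(x, y, window=5):
            """Signed angle (degrees) of early movement relative to straight ahead;
            positive values head toward the right-hand response panel."""
            dx = x[window] - x[0]
            dy = y[window] - y[0]
            return np.degrees(np.arctan2(dx, dy))

        # Toy trajectory: a reach drifting rightward while moving forward.
        rng = np.random.default_rng(2)
        samples = 60                                        # e.g., ~60 tracked frames
        y = np.linspace(0.0, 30.0, samples)                 # forward distance (cm)
        x = np.linspace(0.0, 10.0, samples) + 0.1 * rng.standard_normal(samples)
        print(round(initial_heading_deg(x, y), 1))          # early heading, in degrees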
